20 research outputs found

    Characterizing Result Errors in Internet Desktop Grids

    Desktop grids use the free resources in Intranet and Internet environments for large-scale computation and storage. While desktop grids offer a high return on investment, one critical issue is the validation of results returned by participating hosts. Several mechanisms for result validation have been previously proposed. However, the characterization of errors is poorly understood. To study error rates, we implemented and deployed a desktop grid application across several thousand hosts distributed over the Internet. We then analyzed the results to give a quantitative, empirical characterization of error rates. We find that in practice, errors are widespread across hosts but occur relatively infrequently. Moreover, we find that error rates tend not to be stationary over time nor correlated between hosts. In light of these characterization results, we evaluated state-of-the-art error detection mechanisms and describe the trade-offs of using each mechanism. Finally, based on our empirical results, we conduct a benefit analysis of a recently proposed mechanism for error detection tailored for long-running applications. This mechanism is based on using the digest of intermediate checkpoints, and we show in theory and simulation that the relative benefit of this method compared to the state of the art is as high as 45%.
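    The abstract above describes error detection via digests of intermediate checkpoints: replicas of a long-running task periodically hash their checkpoint state, and the digests are compared so that a divergence is caught well before the task finishes. The paper does not give its implementation, but the idea can be sketched as follows (function names and chunk size are illustrative, not from the paper):

    ```python
    import hashlib

    def checkpoint_digest(path: str) -> str:
        """Hash a checkpoint file so replicas can be compared cheaply,
        without shipping the full checkpoint back to the server."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in 64 KiB chunks to keep memory use constant.
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    def first_divergence(digests_a, digests_b):
        """Given the ordered intermediate-checkpoint digests reported by
        two replicas, return the index of the first checkpoint where they
        disagree, or None if all compared digests match."""
        for i, (a, b) in enumerate(zip(digests_a, digests_b)):
            if a != b:
                return i
        return None
    ```

    The benefit analyzed in the paper comes from detecting the divergence at checkpoint granularity: an erroneous replica can be abandoned and rescheduled at the first mismatching checkpoint rather than after the entire task completes.
    
    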

    DSL-Lab: a Platform to Experiment on Domestic Broadband Internet

    This report presents the design and construction of DSL-Lab, a platform for distributed computing and peer-to-peer experiments over the domestic broadband Internet. Experimental platforms such as PlanetLab and Grid'5000 are promising methodological approaches for studying distributed systems. However, both platforms focus on high-end services and network deployments on only a restricted part of the Internet, and as such, they do not reproduce the experimental conditions of residential broadband networks. DSL-Lab is composed of 40 low-power and noiseless nodes, which are hosted by participants, using the participants' xDSL or cable access to the Internet. The objective is twofold: 1) to provide accurate and customized measurements of availability, activity, and performance in order to characterize and tune models of such resources; 2) to provide an experimental platform for new protocols, services, and applications, as well as a validation tool for simulators and emulators targeting these systems. In this article, we report on the software infrastructure (security, resource allocation, power management) as well as on the first results and experiments achieved.

    DSL-Lab: a Low-power Lightweight Platform to Experiment on Domestic Broadband Internet

    This article presents the design and construction of DSL-Lab, a platform for experimenting on distributed computing over broadband domestic Internet. Experimental platforms such as PlanetLab and Grid'5000 are promising methodological approaches to study distributed systems. However, both platforms focus on high-end service and network deployments only available on a restricted part of the Internet, leaving aside the possibility for researchers to experiment in conditions close to what is usually available with a domestic connection to the Internet. DSL-Lab is a complementary approach to PlanetLab and Grid'5000 for experimenting with distributed computing in an environment closer to how the Internet appears when applications run on end-user PCs. DSL-Lab is a set of 40 low-power and low-noise nodes, which are hosted by participants, using the participants' xDSL or cable access to the Internet. The objective is to provide a validation and experimentation platform for new protocols, services, simulators, and emulators for these systems. In this paper, we report on the software design (security, resource allocation, power management) as well as on the first experiments achieved.

    Improved functionalization of oleic acid-coated iron oxide nanoparticles for biomedical applications

    Superparamagnetic iron oxide nanoparticles can provide multiple benefits for biomedical applications in aqueous environments, such as magnetic separation or magnetic resonance imaging. To increase colloidal stability and allow subsequent reactions, the introduction of hydrophilic functional groups onto the particles' surface is essential. During this process, the original coating is exchanged for preferably covalently bonded ligands such as trialkoxysilanes. The duration of the silane exchange reaction, which commonly takes more than 24 h, is an important drawback of this approach. In this paper, we present a novel method, which introduces ultrasonication as an energy source to dramatically accelerate this process, resulting in high-quality water-dispersible nanoparticles around 10 nm in size. To prove the generic character, different functional groups were introduced on the surface, including polyethylene glycol chains, carboxylic acid, amine, and thiol groups. Their colloidal stability in various aqueous buffer solutions as well as in human plasma and serum was investigated to allow implementation in biomedical and sensing applications.
